Tensor robust principal component analysis (TRPCA) is a fundamental model in machine learning and computer vision. Recently, tensor train (TT) decomposition has been verified to be effective for capturing the global low-rank correlations in tensor recovery tasks. However, owing to the large-scale tensor data in real-world applications, previous TRPCA models often suffer from high computational complexity. In this letter, we propose an efficient TRPCA model under a hybrid model of Tucker and TT decompositions. Specifically, we theoretically show that the TT nuclear norm (TTNN) of the original large tensor can be equivalently converted to that of a much smaller tensor via a Tucker compression format, which significantly reduces the computational cost of singular value decomposition (SVD). Numerical experiments on synthetic and real-world tensor data verify the superiority of the proposed model.
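The computational claim rests on a simple identity: when the Tucker factor matrices have orthonormal columns, the TT nuclear norm of the large tensor equals that of its much smaller Tucker core. The numpy sketch below is our own illustration of that identity on a synthetic low-Tucker-rank tensor, not the authors' TRPCA solver (the sparse component and the optimization iterations are omitted); names such as tt_nuclear_norm and hosvd are ours.

```python
import numpy as np

def tt_nuclear_norm(t):
    """Sum of nuclear norms of the sequential TT unfoldings of a tensor."""
    dims = t.shape
    total = 0.0
    for k in range(1, t.ndim):
        mat = t.reshape(int(np.prod(dims[:k])), -1)
        total += np.linalg.norm(mat, ord='nuc')
    return total

def hosvd(t, ranks):
    """Truncated HOSVD: returns column-orthonormal factors and the Tucker core."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(u[:, :r])
    core = t.copy()
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return factors, core

# Build a 4-way tensor with exact Tucker rank (2, 2, 2, 2) and size 8 per mode.
rng = np.random.default_rng(0)
ranks = (2, 2, 2, 2)
core = rng.standard_normal(ranks)
factors = [np.linalg.qr(rng.standard_normal((8, r)))[0] for r in ranks]
x = core
for mode, u in enumerate(factors):
    x = np.moveaxis(np.tensordot(u, np.moveaxis(x, mode, 0), axes=1), 0, mode)

_, small_core = hosvd(x, ranks)
# The two values are (numerically) equal, but the SVDs behind the second one act
# on 2x2x2x2 unfoldings instead of 8x8x8x8 ones -- the source of the speed-up.
print(tt_nuclear_norm(x), tt_nuclear_norm(small_core))
```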
Neuroevolution has greatly promoted Deep Neural Network (DNN) architecture design and its applications, yet methods that work across different DNN types while accounting for both their scale and performance are still lacking. In this study, we propose a self-adaptive neuroevolution (SANE) approach to automatically construct various lightweight DNN architectures for different tasks. One of the key settings in SANE is the search space defined by cells and organs self-adapted to different DNN types. Based on this search space, a constructive evolution strategy with uniform evolution settings and operations is designed to grow DNN architectures gradually. SANE is able to self-adaptively adjust evolution exploration and exploitation to improve search efficiency. Moreover, a speciation scheme is developed to protect evolution from early convergence by restricting selection competition within species. To evaluate SANE, we carry out neuroevolution experiments to generate different DNN architectures, including a convolutional neural network, a generative adversarial network, and a long short-term memory network. The results illustrate that the obtained DNN architectures can be smaller in scale while achieving performance similar to existing DNN architectures. Our proposed SANE provides an efficient approach to self-adaptively search DNN architectures across different types.
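The abstract does not spell out the cell/organ encoding, so the sketch below is only a generic constructive-evolution skeleton illustrating the two mechanisms it names: genomes that grow gradually and speciation that restricts selection competition to members of the same species. The cell representation, distance measure, and fitness function are placeholders of ours, not SANE's.

```python
import random

def random_cell():
    return {"op": random.choice(["conv3x3", "conv1x1", "skip", "pool"]),
            "width": random.choice([16, 32, 64])}

def mutate(genome):
    child = [dict(c) for c in genome]
    child.append(random_cell())              # constructive growth: cells are only added
    return child

def distance(a, b):
    return abs(len(a) - len(b)) + sum(x["op"] != y["op"] for x, y in zip(a, b))

def speciate(population, threshold=3):
    species = []
    for genome in population:
        for members in species:
            if distance(genome, members[0]) < threshold:
                members.append(genome)
                break
        else:
            species.append([genome])
    return species

def fitness(genome):
    # placeholder: reward a capacity proxy, penalize scale (number of cells)
    return sum(c["width"] for c in genome) ** 0.5 - 0.5 * len(genome)

population = [[random_cell()] for _ in range(20)]
for generation in range(10):
    survivors = []
    for members in speciate(population):
        members.sort(key=fitness, reverse=True)   # competition restricted to the species
        survivors += members[:max(1, len(members) // 2)]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20 - len(survivors))]

best = max(population, key=fitness)
print(len(best), "cells, fitness", round(fitness(best), 2))
```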
Predicting the health risks of patients using Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Health risk refers to the probability of the occurrence of a specific health outcome for a specific patient. The predicted risks can be used to support decision-making by healthcare professionals. EHRs are structured patient journey data. Each patient journey contains a chronological set of clinical events, and within each clinical event, there is a set of clinical/medical activities. Due to variations of patient conditions and treatment needs, EHR patient journey data has an inherently high degree of missingness that contains important information affecting relationships among variables, including time. Existing deep learning-based models generate imputed values for missing values when learning the relationships. However, imputed data in EHR patient journey data may distort the clinical meaning of the original EHR patient journey data, resulting in classification bias. This paper proposes a novel end-to-end approach to modeling EHR patient journey data with Integrated Convolutional and Recurrent Neural Networks. Our model can capture both long- and short-term temporal patterns within each patient journey and effectively handle the high degree of missingness in EHR data without any imputation data generation. Extensive experimental results using the proposed model on two real-world datasets demonstrate robust performance as well as superior prediction accuracy compared to existing state-of-the-art imputation-based prediction methods.
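As a simplified illustration of combining convolutional and recurrent layers while avoiding imputation altogether, the sketch below feeds the observation mask alongside the zero-filled values, uses a 1-D convolution for short-term patterns within a patient journey and a GRU for long-term ones. It is a minimal sketch under our own assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvRecurrentRisk(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(2 * n_features, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, values, mask):
        # values, mask: (batch, time, features); missing entries are zeroed,
        # the mask marks which entries were actually observed.
        x = torch.cat([values * mask, mask], dim=-1)        # no imputed values are generated
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # short-term patterns
        _, h = self.gru(x)                                  # long-term patterns
        return torch.sigmoid(self.head(h[-1]))              # health risk per patient

model = ConvRecurrentRisk(n_features=10)
values = torch.randn(4, 20, 10)
mask = (torch.rand(4, 20, 10) > 0.6).float()                # high missingness, as in real EHRs
print(model(values, mask).shape)                            # torch.Size([4, 1])
```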
Federated learning (FL) is a distributed machine learning framework that can alleviate data silos, in which decentralized clients collaboratively learn a global model without sharing their private data. However, clients' non-independent and identically distributed (non-IID) data negatively affect the trained model, and clients with different numbers of local updates may cause large gaps among the local gradients in each communication round. In this paper, we propose a federated vector averaging (FedVeca) method to address this non-IID data problem. Specifically, we set a novel objective for the global model that is related to the local gradients. A local gradient is defined as a bidirectional vector with a step size and a direction, where the step size is the number of local updates and the direction is divided into positive and negative according to our definition. In FedVeca, the direction is affected by the step size, so we average the bidirectional vectors to reduce the effect of different step sizes. We then theoretically analyze the relationship between the step sizes and the global objective, and obtain an upper bound on the step sizes per communication round. Based on this upper bound, we design an algorithm for the server and the clients to adaptively adjust the step sizes so that the objective approaches the optimum. Finally, we conduct experiments on different datasets, models, and scenarios by building a prototype system, and the experimental results demonstrate the effectiveness and efficiency of the FedVeca method.
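The adaptive step-size rule comes from the paper's theoretical upper bound, which the abstract does not give in closed form, so the sketch below only illustrates the averaging idea: each client reports its accumulated update together with its number of local steps, and the server averages step-size-normalized vectors so that clients with many local updates do not dominate. The objectives, constants, and rescaling below are toy assumptions of ours, not the FedVeca algorithm itself.

```python
import numpy as np

def fedveca_round(global_w, clients, lr=0.1):
    vectors = []
    for grad_fn, steps in clients:
        w = global_w.copy()
        for _ in range(steps):                 # local SGD with a client-specific step count
            w -= lr * grad_fn(w)
        delta = w - global_w
        vectors.append(delta / steps)          # normalize by step size before averaging
    # rescale by the average step size so the update magnitude stays comparable
    return global_w + np.mean(vectors, axis=0) * np.mean([s for _, s in clients])

# toy quadratic objectives standing in for non-IID clients (optima at 1, 3, -2)
clients = [(lambda w, c=c: w - c, steps) for c, steps in [(1.0, 2), (3.0, 8), (-2.0, 5)]]
w = np.array([0.0])
for _ in range(50):
    w = fedveca_round(w, clients)
print(w)   # drifts toward a consensus of the client optima despite unequal step counts
```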
Electronic health records (EHRs) exhibit a large amount of missing data due to variations in patient conditions and treatment needs. Imputation of missing values is considered an effective way to address this challenge. Existing work separates imputation methods and prediction models into two independent parts of an EHR-based machine learning system. We propose an integrated end-to-end approach by exploiting a Compound Density Network (CDNet), which allows the imputation method and the prediction model to be tuned together within a single framework. CDNet consists of a gated recurrent unit (GRU), a mixture density network (MDN), and a regularized attention network (RAN). The GRU serves as a latent-variable model for modeling EHR data. The MDN is designed to sample the latent variables generated by the GRU. The RAN acts as a regularizer for less reliable imputed values. The architecture of CDNet enables the GRU and the MDN to iteratively leverage each other's output to impute missing values, leading to more accurate and robust predictions. We validate CDNet on the mortality prediction task of the MIMIC-III dataset. Our model outperforms state-of-the-art models by significant margins. We also empirically show that regularizing the imputed values is a key factor in the superior predictive performance. An analysis of prediction uncertainty shows that our model can capture both aleatoric and epistemic uncertainty, giving model users a better understanding of the model results.
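To make the GRU/MDN interplay concrete, here is a schematic forward pass of ours (not the authors' code): at each step the GRU hidden state parameterizes a mixture density, a component is sampled and its mean fills in the missing entries, and the completed vector is fed back into the GRU. Component variances, the training procedure, and the RAN regularizer that down-weights unreliable imputations are all omitted from this sketch.

```python
import torch
import torch.nn as nn

class GRUMDNImputer(nn.Module):
    def __init__(self, n_features, hidden=32, n_components=3):
        super().__init__()
        self.cell = nn.GRUCell(n_features, hidden)
        self.mdn = nn.Linear(hidden, n_components * (1 + n_features))  # mixture logits + means
        self.k, self.d = n_components, n_features
        self.head = nn.Linear(hidden, 1)

    def forward(self, values, mask):
        batch, steps, _ = values.shape
        h = values.new_zeros(batch, self.cell.hidden_size)
        for t in range(steps):
            params = self.mdn(h)                              # GRU state parameterizes the MDN
            logits = params[:, :self.k]
            mu = params[:, self.k:].view(batch, self.k, self.d)
            comp = torch.distributions.Categorical(logits=logits).sample()
            sampled = mu[torch.arange(batch), comp]           # sampled component mean as imputation
            x_t = torch.where(mask[:, t].bool(), values[:, t], sampled)
            h = self.cell(x_t, h)                             # completed vector fed back to the GRU
        return torch.sigmoid(self.head(h))                    # e.g. mortality risk

model = GRUMDNImputer(n_features=8)
values = torch.randn(4, 10, 8)
mask = (torch.rand(4, 10, 8) > 0.5).float()
print(model(values, mask).shape)                              # torch.Size([4, 1])
```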
Recently, a number of works have shown that the performance of neural machine translation (NMT) can be improved to some extent by using visual information. However, most of these conclusions are drawn from analyses of experimental results based on limited sets of bilingual sentence-image pairs, such as Multi30K. In this kind of dataset, the content of a bilingual parallel sentence pair must be well represented by a manually annotated image, which differs from the actual translation situation. Some previous works have addressed this issue by retrieving images from existing sentence-image pairs with topic models. However, because the sentence-image pairs they use are limited in collection, their image retrieval methods are hard to scale, and it is difficult to prove that the visual information enhances NMT rather than the mere co-occurrence of images and sentences. In this paper, we propose an open-vocabulary image retrieval method to collect descriptive images for a bilingual parallel corpus using an image search engine. We then propose a text-aware attentive visual encoder to filter out incorrectly collected noisy images. Experimental results on Multi30K and two other translation datasets show that our proposed method achieves significant improvements over strong baselines.
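A minimal sketch (assumptions ours, not the paper's exact encoder) of what "text-aware attentive" filtering can look like: the sentence representation queries the features of the retrieved images, so images unrelated to the sentence receive low attention weights and contribute little to the visual context passed on to the NMT model.

```python
import torch
import torch.nn as nn

class TextAwareVisualEncoder(nn.Module):
    def __init__(self, text_dim=512, img_dim=2048, hidden=512):
        super().__init__()
        self.q = nn.Linear(text_dim, hidden)
        self.k = nn.Linear(img_dim, hidden)
        self.v = nn.Linear(img_dim, hidden)

    def forward(self, sentence_repr, image_feats):
        # sentence_repr: (batch, text_dim); image_feats: (batch, n_images, img_dim)
        q = self.q(sentence_repr).unsqueeze(1)                     # (batch, 1, hidden)
        scores = (q * self.k(image_feats)).sum(-1) / q.shape[-1] ** 0.5
        weights = torch.softmax(scores, dim=-1)                    # noisy images get low weight
        context = (weights.unsqueeze(-1) * self.v(image_feats)).sum(1)
        return context, weights

encoder = TextAwareVisualEncoder()
ctx, w = encoder(torch.randn(2, 512), torch.randn(2, 5, 2048))     # 5 retrieved images per sentence
print(ctx.shape, w.shape)                                          # (2, 512) (2, 5)
```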
Building models for health prediction based on electronic health records (EHRs) has become an active research area. EHR patient journey data consists of a patient's regular clinical events/visits. Most existing studies focus on modeling long-term dependencies among visits without explicitly taking the short-term correlations between consecutive visits into account, where irregular time intervals, incorporated as auxiliary information, are fed into health prediction models to capture the latent progressive patterns of patient journeys. We propose a novel deep neural network with four modules that takes into account the contributions of various variables to health prediction: i) a stacked attention module that strengthens the deep semantics of clinical events within each patient journey and generates visit embeddings, ii) a short-term temporal attention module that models the short-term correlations between consecutive visit embeddings while capturing the influence of time intervals within these visit embeddings, iii) a long-term temporal attention module that models long-term dependencies while capturing the influence of time intervals within these visit embeddings, and iv) a coupled attention module that adapts the outputs of the short-term and long-term temporal attention modules to make the health prediction. Experimental results on MIMIC-III demonstrate the superior prediction accuracy of our model compared with existing state-of-the-art methods, as well as the interpretability and robustness of the approach. Furthermore, we find that modeling short-term correlations contributes to the generation of local priors, which improves the predictive modeling of patient journeys.
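The following is a compressed sketch of our own making, not the authors' model, that wires the four modules together in the order listed: attention over activities inside each visit produces visit embeddings, a short-window attention and a full-history attention (both informed by the time gaps between visits) are computed, and a coupled gate mixes the two for the final prediction. The time-interval handling here is a simple additive projection, which is an assumption.

```python
import torch
import torch.nn as nn

class CoupledAttentionRisk(nn.Module):
    def __init__(self, d=64, window=3):
        super().__init__()
        self.event_attn = nn.Linear(d, 1)     # i) stacked attention over clinical events
        self.time_proj = nn.Linear(1, d)      #    injects irregular time intervals
        self.short_attn = nn.Linear(d, 1)     # ii) short-term, over the last `window` visits
        self.long_attn = nn.Linear(d, 1)      # iii) long-term, over the whole journey
        self.gate = nn.Linear(2 * d, 1)       # iv) coupled attention
        self.head = nn.Linear(d, 1)
        self.window = window

    def attend(self, score_layer, x):
        w = torch.softmax(score_layer(x).squeeze(-1), dim=1)
        return (w.unsqueeze(-1) * x).sum(1)

    def forward(self, events, intervals):
        # events: (batch, visits, activities, d); intervals: (batch, visits) time since previous visit
        b, v = events.shape[0], events.shape[1]
        visits = self.attend(self.event_attn, events.flatten(0, 1)).view(b, v, -1)
        visits = visits + self.time_proj(intervals.unsqueeze(-1))
        short = self.attend(self.short_attn, visits[:, -self.window:])
        long = self.attend(self.long_attn, visits)
        g = torch.sigmoid(self.gate(torch.cat([short, long], dim=-1)))
        return torch.sigmoid(self.head(g * short + (1 - g) * long))

model = CoupledAttentionRisk()
print(model(torch.randn(2, 10, 6, 64), torch.rand(2, 10)).shape)    # torch.Size([2, 1])
```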
An autoencoder (AE) and extreme learning machine (ELM) based model (AE-ELM) is proposed to predict NOx emission concentration based on the combination of the mutual information (MI) algorithm, AE, and ELM. First, the importance of practical variables is calculated by the MI algorithm, and the mechanism is analyzed to determine the variables related to NOx emission concentration. Then, the time-delay correlations between the selected variables and NOx emission concentration are further analyzed to reconstruct the modeling data. Subsequently, the AE is applied to extract hidden features from the input variables. Finally, the ELM algorithm establishes the relationship between NOx emission concentration and the deep features. Experimental results on practical data show that the proposed model achieves promising performance compared with state-of-the-art models.
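A condensed sketch of this pipeline on synthetic data, under our own assumptions: mutual-information-based variable selection, a small autoencoder for hidden features, and an ELM (random hidden layer plus least-squares output weights) mapping those features to the target. The time-delay reconstruction of the inputs and all plant-specific details are omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12)).astype(np.float32)          # stand-in for boiler variables
y = (2 * X[:, 0] - X[:, 3] + 0.1 * rng.standard_normal(500)).astype(np.float32)  # stand-in for NOx

# 1) mutual information selects the variables most related to the target
mi = mutual_info_regression(X, y)
selected = X[:, np.argsort(mi)[-4:]]

# 2) a small autoencoder extracts hidden features from the selected variables
ae = nn.Sequential(nn.Linear(4, 3), nn.Tanh(), nn.Linear(3, 4))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
xs = torch.from_numpy(selected)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(xs), xs)
    loss.backward()
    opt.step()
features = ae[0](xs).tanh().detach().numpy()                    # encoder output

# 3) ELM: random hidden layer, output weights solved by least squares
W = rng.standard_normal((features.shape[1], 50)).astype(np.float32)
b = rng.standard_normal(50).astype(np.float32)
H = np.tanh(features @ W + b)
beta = np.linalg.pinv(H) @ y
print(float(np.mean((H @ beta - y) ** 2)))                      # training MSE of the sketch
```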
Data augmentation is an important component of the robustness evaluation of natural language processing (NLP) models, as well as of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
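To illustrate the two extension points the framework distinguishes, here is a small sketch of a transformation and a filter. The class and method names below are generic placeholders of ours, not NL-Augmenter's actual interfaces; see the repository above for the real base classes.

```python
import random

class ButterFingersTransformation:
    """Transformation: modifies the data, e.g. by injecting keyboard-neighbor typos."""
    NEIGHBORS = {"a": "qs", "e": "wr", "o": "ip", "t": "ry"}

    def generate(self, sentence, rate=0.1, seed=0):
        random.seed(seed)
        out = [random.choice(self.NEIGHBORS[c])
               if c in self.NEIGHBORS and random.random() < rate else c
               for c in sentence]
        return ["".join(out)]

class ShortSentenceFilter:
    """Filter: splits data according to a specific feature, here sentence length."""
    def filter(self, sentence, max_tokens=10):
        return len(sentence.split()) <= max_tokens

t, f = ButterFingersTransformation(), ShortSentenceFilter()
print(t.generate("the quick brown fox jumps over the lazy dog"))
print(f.filter("the quick brown fox jumps over the lazy dog"))   # True: 9 tokens
```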
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
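Below is a schematic iRMB-style block of our own, not the official EMO implementation: an inverted-residual layout whose expanded features pass through multi-head self-attention (long-distance interactions) and a depth-wise 3x3 convolution (short-distance, CNN-like efficiency) before being projected back and added to the input. Details such as the exact normalization, window attention, and expansion ratios of the paper are assumptions simplified away here.

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    def __init__(self, dim=64, expand=2, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.BatchNorm2d(dim)
        self.expand = nn.Conv2d(dim, hidden, 1)                         # inverted-residual expansion
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depth-wise, local modeling
        self.project = nn.Conv2d(hidden, dim, 1)
        self.act = nn.SiLU()

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.act(self.expand(self.norm(x)))
        tokens = y.flatten(2).transpose(1, 2)                           # (b, h*w, hidden)
        y = y + self.attn(tokens, tokens, tokens)[0].transpose(1, 2).view(b, -1, h, w)
        y = self.act(self.dw(y))                                        # short-distance dependency
        return x + self.project(y)                                      # residual connection

block = iRMBSketch()
print(block(torch.randn(2, 64, 14, 14)).shape)                          # torch.Size([2, 64, 14, 14])
```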